Results 1 - 20 of 28
1.
Anesth Analg; 133(5): 1331-1341, 2021 Nov 01.
Article in English | MEDLINE | ID: covidwho-1566542

ABSTRACT

In 2020, the coronavirus disease 2019 (COVID-19) pandemic interrupted the administration of the APPLIED Examination, the final part of the American Board of Anesthesiology (ABA) staged examination system for initial certification. In response, the ABA developed, piloted, and implemented an Internet-based "virtual" form of the examination to allow administration of both components of the APPLIED Exam (Standardized Oral Examination and Objective Structured Clinical Examination) when it was impractical and unsafe for candidates and examiners to travel and have in-person interactions. This article describes the development of the ABA virtual APPLIED Examination, including its rationale, examination format, technology infrastructure, candidate communication, and examiner training. Although the logistics are formidable, we report a methodology for successfully introducing a large-scale, high-stakes, two-element remote examination that replicates previously validated assessments.


Subject(s)
Anesthesiology/education; COVID-19/epidemiology; Certification/methods; Computer-Assisted Instruction/methods; Educational Measurement/methods; Specialty Boards; Anesthesiology/standards; COVID-19/prevention & control; Certification/standards; Clinical Competence/standards; Computer-Assisted Instruction/standards; Educational Measurement/standards; Humans; Internship and Residency/methods; Internship and Residency/standards; Specialty Boards/standards; United States/epidemiology
4.
PLoS One; 16(8): e0254340, 2021.
Article in English | MEDLINE | ID: covidwho-1341496

ABSTRACT

The COVID-19 pandemic has impelled the majority of schools and universities around the world to switch to remote teaching. One of the greatest challenges in online education is preserving the academic integrity of student assessments, as the lack of direct supervision by instructors during final examinations poses a significant risk of academic misconduct. In this paper, we propose a new approach to detecting potential cases of cheating on final exams using machine learning techniques. We treat the identification of potential cheating as an outlier detection problem, using students' continuous assessment results to identify abnormal scores on the final exam. However, unlike a standard outlier detection task in machine learning, student assessment data requires us to account for its sequential nature. We address this by applying recurrent neural networks together with anomaly detection algorithms. Numerical experiments on a range of datasets show that the proposed method achieves a high level of accuracy in detecting cases of cheating. We believe the proposed method would be an effective tool for academics and administrators interested in preserving the academic integrity of course assessments.
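The abstract does not include the authors' implementation, but the approach it describes can be sketched as follows: a recurrent network predicts each student's final-exam score from the sequence of continuous-assessment scores, and large prediction residuals are flagged as outliers. This is a minimal illustration in PyTorch; the LSTM architecture, the z-score threshold, and all names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Predicts a final-exam score from a sequence of continuous-assessment scores."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, n_assessments, 1)
        _, (h, _) = self.lstm(x)               # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)    # (batch,) predicted final scores

def flag_outliers(model, sequences, finals, z_thresh=3.0):
    """Return indices of students whose final score deviates sharply from prediction."""
    with torch.no_grad():
        residuals = finals - model(sequences)
    z = (residuals - residuals.mean()) / residuals.std()
    return torch.nonzero(z.abs() > z_thresh).flatten()
```

After training the predictor on presumed-honest cohorts, an unusually large gap between predicted and actual final score marks a candidate case for manual review; the threshold trades false positives against missed cases.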


Subject(s)
Education, Distance; Educational Measurement; Fraud; Lie Detection; Machine Learning; Algorithms; COVID-19/epidemiology; Datasets as Topic; Deception; Education, Distance/methods; Education, Distance/organization & administration; Educational Measurement/methods; Educational Measurement/standards; Humans; Models, Theoretical; Pandemics; SARS-CoV-2; Universities
6.
Surgery; 170(6): 1652-1658, 2021 Dec.
Article in English | MEDLINE | ID: covidwho-1316640

ABSTRACT

BACKGROUND: In surgical training, assessment tools based on strong validity evidence allow for standardized evaluation despite changing external circumstances. At a large academic institution, surgical interns undergo a multimodal curriculum for central line placement that uses a 31-item binary assessment at the start of each academic year. This study evaluated this practice under increased in-person learning restrictions. We hypothesized that external constraints would affect neither resident performance nor assessment, owing to a robust curriculum and assessment checklist. METHODS: From 2018 to 2020, 81 residents completed central line training and assessment. In 2020, the curriculum was modified to conform to in-person restrictions and social distancing guidelines. Resident score reports were analyzed using multivariate analyses to compare performance, objective scoring parameters, and subjective assessments between the pre-coronavirus disease years (2018 and 2019) and 2020. RESULTS: There were no significant differences in average scores or objective pass rates across the 3 years. Significant differences between 2020 and the pre-coronavirus disease years occurred in subjective pass rates and in first-time success for 4 checklist items: patient positioning, draping, sterile ultrasound probe cover placement, and needle positioning before venipuncture. CONCLUSION: Modifications to procedural training within current restrictions did not adversely affect residents' overall performance. However, our data suggest that in 2020, expert trainers may not have ensured learner acquisition of automated procedural steps. Additionally, although 2020 raters could have been influenced by logistical barriers leading to more lenient grading, the assessment tool ensured training and assessment integrity.


Subject(s)
Catheterization, Central Venous/standards; Educational Measurement/statistics & numerical data; General Surgery/education; COVID-19; Educational Measurement/standards; General Surgery/standards; Humans
7.
Acad Med; 96(9): 1236-1238, 2021 Sep 01.
Article in English | MEDLINE | ID: covidwho-1281877

ABSTRACT

The COVID-19 pandemic interrupted administration of the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam in March 2020 due to public health concerns. As the scope and magnitude of the pandemic became clearer, the initial plans by the USMLE program's sponsoring organizations (the National Board of Medical Examiners [NBME] and the Federation of State Medical Boards) to resume Step 2 CS in the short term shifted to long-range plans to relaunch an exam that could harness technology and reduce infection risk. Insights about ongoing changes in undergraduate and graduate medical education and practice environments, coupled with the challenges of delivering a transformed examination during a pandemic, led to the January 2021 decision to permanently discontinue Step 2 CS. Despite this, the USMLE program considers assessment of clinical skills to be critically important, and the authors believe this decision will facilitate important advances in assessing clinical skills. Factors contributing to the decision included concerns about achieving desired goals within desired time frames; a review of enhancements to clinical skills training and assessment that have occurred since the launch of Step 2 CS in 2004; an opportunity to address safety and health concerns, including those related to examinee stress and wellness during a pandemic; a review of advances in the education, training, practice, and delivery of medicine; and a commitment to pursuing innovative assessments of clinical skills. USMLE program staff continue to seek input from varied stakeholders to shape and prioritize technological and methodological enhancements to guide the development of clinical skills assessment. The program's continued exploration of constructs and methods by which communication skills, clinical reasoning, and physical examination may be better assessed within the remaining components of the exam provides opportunities for examinees, educators, regulators, the public, and other stakeholders to provide input.


Subject(s)
Clinical Competence/standards; Educational Measurement/methods; Licensure, Medical/standards; COVID-19/prevention & control; Educational Measurement/standards; Humans; Licensure, Medical/trends; United States
8.
Acad Med; 96(9): 1239-1241, 2021 Sep 01.
Article in English | MEDLINE | ID: covidwho-1254863

ABSTRACT

The discontinuation of the United States Medical Licensing Examination Step 2 Clinical Skills (CS) in 2020 in response to the COVID-19 pandemic marked the end of a decades-long debate about the utility and value of the exam. For all its controversy, the implementation of Step 2 CS in 2004 brought about profound changes to the landscape of medical education, altering the curriculum and assessment practices of medical schools to ensure students were prepared to take and pass this licensing exam. Its elimination, while celebrated by some, is not without potential negative consequences. As the responsibility for assessing students' clinical skills shifts back to medical schools, educators must take care not to lose the ground they have gained in advancing clinical skills education. Instead, they need to innovate, collaborate, and share resources; hold themselves accountable; and ultimately rise to the challenge of ensuring that physicians have the necessary clinical skills to safely and effectively practice medicine.


Subject(s)
Clinical Competence/standards; Educational Measurement/methods; Licensure, Medical/standards; COVID-19/prevention & control; Education, Medical, Undergraduate/standards; Education, Medical, Undergraduate/trends; Educational Measurement/standards; Humans; Licensure, Medical/trends; United States
9.
Biochem Mol Biol Educ; 49(3): 457-463, 2021 May.
Article in English | MEDLINE | ID: covidwho-1116305

ABSTRACT

The Objective Structured Clinical/Practical Examination (OSCE/OSPE) has been the backbone of assessment in graduate medical education for over three decades. We developed an electronic Objective Structured Practical Examination (e-OSPE) in Medical Biochemistry using freely available Google Forms to mitigate the academic disruption posed by the COVID-19 pandemic in our resource-poor setting. Ten e-OSPE stations were created, interlinked, and time-restricted. Fifty undergraduate students took the e-OSPE examination at a prefixed date and time. Learner feedback was collected immediately after completion of the examination, and facilitator feedback was also collected. Students' mean scores in the e-OSPE and the traditional OSPE were 78.15% and 74.56%, respectively; the difference was not statistically significant (paired t-test, two-tailed p = 0.0979). Thus, e-OSPE results are comparable to those of the traditional OSPE. A Bland-Altman plot revealed that 92% of students had scores within the limits of agreement between the traditional OSPE and the e-OSPE. Both learners and facilitators agreed that the online e-OSPE format is a good alternative for assessment (0.67 and 0.82), that their experience was good (0.72 and 0.92), and that conduct was well organized (0.73 and 0.86). Several suggestions were also received to make the e-OSPE even more effective. In conclusion, this pilot study showed that the e-OSPE can be an effective alternative to the traditional OSPE when in-person evaluation is not possible, as in the current era of COVID-19, even in resource-limited settings.
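The reported analysis (a paired two-tailed t-test plus a Bland-Altman plot with limits of agreement) can be reproduced in outline with SciPy. The scores below are synthetic; only the methods, not the data, follow the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trad_ospe = rng.normal(74.56, 8.0, 50)           # traditional OSPE scores (%)
e_ospe = trad_ospe + rng.normal(3.6, 6.0, 50)    # e-OSPE scores, same 50 students

t, p = stats.ttest_rel(e_ospe, trad_ospe)        # paired, two-tailed by default
print(f"paired t = {t:.2f}, p = {p:.4f}")

diff = e_ospe - trad_ospe                        # Bland-Altman: bias and
bias = diff.mean()                               # 95% limits of agreement
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
within = np.mean((diff >= lower) & (diff <= upper)) * 100
print(f"bias = {bias:.2f}, LoA = ({lower:.2f}, {upper:.2f}), {within:.0f}% within limits")
```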


Subject(s)
Biochemistry/education; Education, Distance; Education, Medical, Undergraduate/standards; Educational Measurement/methods; Educational Measurement/standards; COVID-19/epidemiology; Curriculum; Humans; India/epidemiology; Online Systems; Pandemics; Pilot Projects; SARS-CoV-2; Students, Medical; User-Computer Interface
11.
GMS J Med Educ; 38(1): Doc1, 2021.
Article in English | MEDLINE | ID: covidwho-1110223

ABSTRACT

Introduction: In summer term 2020, the clinical phase of the undergraduate medical curriculum at University Medical Center Göttingen was restructured, since distance teaching had to be used predominantly due to contact restrictions during the COVID-19 pandemic. This paper investigates the impact of restructuring the clinical curriculum on medical students' satisfaction and learning outcomes. Methods: In each cohort, the 13-week curriculum was divided into two parts: during the first 9 weeks, factual knowledge was imparted via distance teaching using a modified inverted classroom approach. This was followed by a 4-week period of adapted classroom teaching involving both real and virtual patients to train students' practical skills. The evaluation of the 21 clinical modules comprised students' satisfaction with distance teaching as well as their learning outcome, the latter assessed by means of comparative self-assessment (CSA) gain and the results of the module exams. Data from summer term 2020 (distance teaching, DT) were compared with the respective data from winter term 2019/20 (classroom teaching, CT) and analysed for differences and correlations. Results: Evaluation response rates were 51.3% in CT and 19.3% in DT. There was no significant difference between mean module exam scores in CT and DT. However, CSA gain was significantly lower in DT than in CT (p=0.047). Further analyses revealed that CSA gain depended on the time point of data collection: CSA gain was lower the more time had passed since the end of a specific module. Moreover, we found positive correlations between CSA gain and students' satisfaction with various aspects of distance teaching, particularly with "communication between teachers and students" (rho=0.674; p=0.002). Discussion and conclusions: Although some limitations and confounding factors have to be taken into account (such as evaluation response rates, assessment time points, and the proportion of familiar items in module exams), the following recommendations can be derived from our findings: valid assessment of students' learning outcome by means of exam results requires that as few exam items as possible are familiar to the students; CSA gain seems valid if assessment time points are standardised and not contaminated by students' learning activities for other modules; and good communication between teachers and students may help increase students' satisfaction with distance teaching.
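The correlation reported between CSA gain and satisfaction can be sketched with Spearman's rank correlation, as below; the 21 module-level data points are synthetic and the variable names are assumptions — only the test choice follows the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
satisfaction = rng.uniform(1.0, 5.0, 21)                  # one rating per clinical module
csa_gain = 0.5 * satisfaction + rng.normal(0.0, 0.6, 21)  # comparative self-assessment gain

rho, p = stats.spearmanr(satisfaction, csa_gain)          # rank-based, robust to outliers
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```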


Subject(s)
Academic Medical Centers/organization & administration; COVID-19/epidemiology; Education, Medical, Undergraduate/organization & administration; Personal Satisfaction; Students, Medical/psychology; Clinical Competence; Communication; Curriculum; Education, Distance; Educational Measurement/methods; Educational Measurement/standards; Humans; Pandemics; Problem-Based Learning/organization & administration; SARS-CoV-2; Virtual Reality
13.
Adv Physiol Educ; 45(1): 84-88, 2021 Mar 01.
Article in English | MEDLINE | ID: covidwho-1105547

ABSTRACT

Medical education has moved online because of the COVID-19 pandemic. Formative assessment is essential to facilitate the learning process in medical education. However, various challenges arise during online assessment, including concerns about reliability when testing is unmonitored and practical concerns such as Internet connectivity issues. This study assessed medical students' perceptions of the reliability, usefulness, and practical challenges of online tests. One hundred first-year undergraduate medical students taking online classes and tests in physiology were enrolled. A questionnaire with items on the practical challenges, reliability, and usefulness of online tests in general, and of different types of online assessment methods in particular, was sent to the students online. Each item was rated on a five-point Likert scale, and the responses were analyzed anonymously. A large percentage of students used mobile phones (81.4%) to take the online tests. Although most students (73.2%; P < 0.001) felt that online tests helped them substantially in learning the subject, network connectivity issues were considered a matter of serious concern (85.5%; P < 0.001). Among the assessment methods used, viva voce by video conferencing was considered the most reliable (83%; P < 0.001). Multiple-choice question-based assessment, when done online, was felt to be more practically feasible, with faster feedback, than classroom assessment. The results suggest that medical students find online formative assessments helpful for their learning, despite concerns about reliability and practical challenges.


Subject(s)
Education, Distance/standards; Education, Medical/standards; Educational Measurement/standards; Students, Medical/psychology; Surveys and Questionnaires; COVID-19; Education, Distance/methods; Education, Medical/methods; Educational Measurement/methods; Feasibility Studies; Female; Humans; Male; Reproducibility of Results
15.
BMC Med Educ; 21(1): 86, 2021 Feb 02.
Article in English | MEDLINE | ID: covidwho-1061007

ABSTRACT

BACKGROUND: The use of remote online delivery for summative assessments has been underexplored in medical education. Due to the COVID-19 pandemic, all end-of-year applied knowledge multiple choice question (MCQ) tests at one UK medical school were switched from on-campus to remote assessments. METHODS: We conducted an online survey of student experience with remote exam delivery and compared test performance in remote versus invigilated campus-based forms of similar assessments for Year 4 and Year 5 students across two academic years. RESULTS: Very few students experienced technical or practical problems in completing their exam remotely. Test anxiety was reduced for some students but increased for others. The majority of students preferred the traditional setting of invigilated exams in a computer lab, feeling this ensured a level playing field for all candidates. The mean score was higher for Year 4 students in the remotely delivered versus the campus-based form of the same exam (76.53% [SD 6.57] vs. 72.81% [6.64]; t(438.38) = 5.94, p = 0.001; d = 0.56), whereas candidate performance was equivalent across both forms for Year 5 students. CONCLUSIONS: Remote online MCQ exam delivery is an effective and generally acceptable approach to summative assessment and could be used again in future without detriment to students if onsite delivery is not possible.
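The fractional degrees of freedom in t(438.38) indicate a Welch (unequal-variance) t-test. The sketch below runs that test and a pooled-SD Cohen's d on synthetic scores drawn from the reported means and SDs; the group size of 220 per arm is an assumption chosen to give roughly the reported degrees of freedom.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
remote = rng.normal(76.53, 6.57, 220)   # remote delivery, Year 4
campus = rng.normal(72.81, 6.64, 220)   # invigilated campus delivery, Year 4

t, p = stats.ttest_ind(remote, campus, equal_var=False)   # Welch's t-test

# Cohen's d using the pooled standard deviation
n1, n2 = len(remote), len(campus)
pooled_sd = np.sqrt(((n1 - 1) * remote.var(ddof=1)
                     + (n2 - 1) * campus.var(ddof=1)) / (n1 + n2 - 2))
d = (remote.mean() - campus.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```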


Subject(s)
Academic Performance; COVID-19; Education, Distance/methods; Education, Medical, Undergraduate/methods; Educational Measurement/methods; Anxiety; COVID-19/epidemiology; Consumer Behavior; Educational Measurement/standards; Humans; Pandemics; SARS-CoV-2; Students/psychology; United Kingdom/epidemiology
16.
GMS J Med Educ; 37(7): Doc87, 2020.
Article in English | MEDLINE | ID: covidwho-971046

ABSTRACT

Objective: The primary outcome of this retrospective study was the comparison of state examination results obtained under simulated treatment conditions in times of COVID-19 versus patient treatment under non-pandemic conditions. Additionally, correlation analysis was performed between students' self-assessment and examiners' assessment of the treatment results. Methods: Within 4 hours, each of 22 examinees had to place multi-surface adhesive anterior and posterior restorations, perform an endodontic treatment on a maxillary premolar, and carry out a periodontal debridement of one quadrant. All treatments were performed on a model fixed in a phantom head. Compliance with the prescribed hygiene and social distancing guidelines and self-assessment of the practical performance were also part of the practical examination. One examiner per examination part anonymously evaluated the final results. The historical control was based on the exam results of a 2019 cohort. Mean values (standard deviation), non-parametric correlations (Spearman's rho), and group comparisons (Mann-Whitney) were calculated for statistical analysis. Results: Examination results under simulated treatment conditions were significantly worse (p<0.05) than in the cohort that took their state exam on patients, with the exception of the endodontic partial exam. The overall scores in restorative dentistry and periodontology of the two groups, which include a structured theoretical examination, did not differ. The majority of candidates rated their performance worse than the examiners did, and there was no correlation between self- and third-party assessment. Conclusion: In this comparison of two years, a simulated practical examination without patients in restorative dentistry, endodontics, and periodontology yielded overall results comparable to an examination on patients. Equal conditions for the candidates, better comparability, and avoidance of the ethical dilemmas of patient treatment under examination conditions could be arguments for a state examination under phantom conditions in the future.
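A minimal sketch of the group comparison named in the methods — a two-sided Mann-Whitney U test between the simulated-conditions cohort and the historical patient-treatment cohort. The scores are synthetic; only the test choice follows the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
simulated_2020 = rng.normal(70.0, 8.0, 22)   # phantom-head cohort, 22 examinees
patients_2019 = rng.normal(76.0, 8.0, 22)    # historical control on patients

u, p = stats.mannwhitneyu(simulated_2020, patients_2019, alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.4f}")
```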


Subject(s)
COVID-19/epidemiology; Education, Dental/organization & administration; Education, Distance/organization & administration; Educational Measurement/statistics & numerical data; Dentists/education; Education, Dental/standards; Education, Distance/standards; Educational Measurement/standards; Endodontics/education; Humans; Models, Anatomic; Pandemics; SARS-CoV-2; Self-Assessment; Students, Dental
17.
Front Public Health; 8: 609599, 2020.
Article in English | MEDLINE | ID: covidwho-971923

ABSTRACT

In the wake of COVID-19, there is an urgent need for a diverse public health workforce to address problems presented or exacerbated by the global pandemic. The educational programs that build this workforce both train it and, through graduate admissions, shape who gains access to it. The Graduate Record Examination (GRE) has a number of long-standing issues, with additional barriers created by the pandemic. We trace the GRE waiver movement over several years, focusing on the gradual adoption of waivers in CEPH-accredited programs and the rapid expansion of temporary waivers in response to reduced testing access. Going forward, we need to consider gaps in waivers during the pandemic and how these data can be used to shape future use of the GRE.


Subject(s)
COVID-19; Education, Medical/statistics & numerical data; Education, Medical/standards; Educational Measurement/statistics & numerical data; Educational Measurement/standards; Public Health/education; School Admission Criteria/statistics & numerical data; Adult; Female; Humans; Male; SARS-CoV-2; Students, Medical; United States; Young Adult
20.
J Med Imaging Radiat Sci; 51(4): 610-616, 2020 Dec.
Article in English | MEDLINE | ID: covidwho-882627

ABSTRACT

INTRODUCTION: Online open-book assessment has been a common alternative to a traditional invigilated test or examination during the COVID-19 pandemic. However, its unsupervised nature makes cheating easier, which is an academic integrity concern. This study's purpose was to evaluate the integrity of two online open-book assessments with different formats (1. tightly time-restricted: 50 min for the mid-semester assessment; and 2. take-home: any 4 h within a 24-h window for the end-of-semester assessment) implemented in a radiologic pathology unit of a Bachelor of Science (Medical Radiation Science) course during the pandemic. METHODS: This was a retrospective study involving a review and analysis of existing information related to the integrity of the two radiologic pathology assessments. Three integrity evaluation approaches were employed. The first was to review all Turnitin plagiarism detection software reports, using a 'seven-words-in-a-row' criterion to identify any potential collusion. The second was to search for highly irrelevant assessment answers during marking to detect other types of cheating; examples included answers not addressing the question requirements and answers stating patients' clinical information not drawn from the given patient histories. The third was a statistical analysis of assessment scores, using descriptive and inferential statistics, to identify abnormal patterns that might suggest cheating, such as unusually high scores. The descriptive statistics used were minimum, maximum, range, first quartile, median, third quartile, interquartile range, mean, standard deviation, and fail and full-mark rates. T-tests were employed to compare mean scores between the two assessments in 2020, between the two assessments in 2019, between the mid-semester assessments of 2019 and 2020, and between the end-of-semester assessments of 2019 and 2020. A p-value of less than 0.05 was considered statistically significant. RESULTS: No evidence of cheating was found in any Turnitin report or assessment answer. The mean scores of the end-of-semester assessments in 2019 (88.2%) and 2020 (90.9%) were similar (p = 0.098). However, the mean score of the online open-book mid-semester assessment in 2020 (62.8%) was statistically significantly lower than that of the traditional invigilated mid-semester assessment in 2019 (71.8%; p < 0.0001). CONCLUSION: This study shows that the use of online open-book assessments in the tightly time-restricted and take-home formats in the radiologic pathology unit did not raise any academic integrity issues; the strict assessment time limit appears to have played an important role in maintaining their integrity.
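The 'seven-words-in-a-row' criterion applied to the Turnitin reports can be expressed directly in code: two answers are flagged if they share any identical run of seven consecutive words. This is an assumed illustration of the criterion, not the study's actual tooling.

```python
def seven_grams(text):
    """All runs of 7 consecutive words in the text, case-folded."""
    words = text.lower().split()
    return {tuple(words[i:i + 7]) for i in range(len(words) - 6)}

def possible_collusion(answer_a, answer_b):
    """True if the two answers share at least one identical 7-word run."""
    return bool(seven_grams(answer_a) & seven_grams(answer_b))

def flag_pairs(answers):
    """Flag every pair of submissions sharing a 7-word run (answers: student -> text)."""
    grams = {s: seven_grams(t) for s, t in answers.items()}
    students = sorted(grams)
    return [(a, b) for i, a in enumerate(students)
            for b in students[i + 1:] if grams[a] & grams[b]]
```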


Subject(s)
COVID-19/prevention & control; Education, Distance/standards; Education, Medical, Undergraduate/standards; Educational Measurement/standards; Plagiarism; Radiology/education; Students, Medical/statistics & numerical data; Adult; Australia; Education, Distance/methods; Education, Distance/statistics & numerical data; Education, Medical, Undergraduate/methods; Education, Medical, Undergraduate/statistics & numerical data; Educational Measurement/methods; Educational Measurement/statistics & numerical data; Female; Humans; Male; Medical Oncology/education; Pandemics; Retrospective Studies; Software; Time Factors; Young Adult